Improving Multilayer Back-Propagation Neural Networks by Using a Variable Learning Rate and Automata Theory, and Determining the Optimum Learning Rate

Authors

Mohammadreza Jafarian

Alireza Abbaszadeh

Abstract

Multilayer back-propagation neural networks have received considerable attention from researchers. Despite their outstanding success in modelling the relationship between inputs and outputs, they have several drawbacks: training these networks takes a long time, and sometimes they fail to learn at all. This long training time is caused by the selection of unsuitable network parameters. The network parameters, the weights and biases, are obtained by gradient descent on the network energy (error) function. Since the network error function is not a flat surface, the network can stall at local optima, where the gradient offers no further guidance. To compensate for these drawbacks of the back-propagation algorithm, we use an adaptive variable learning rate to speed up learning, and to avoid the network becoming trapped in local optima we use an automata algorithm. With this method, an improved learning rate can be obtained for different networks.
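The abstract's central idea, a learning rate that adapts to whether the training error is falling, can be sketched with the common "bold driver" heuristic. This is only an illustrative assumption: the paper's exact update rule and its automata-based escape mechanism are not given in the abstract, so the growth/shrink factors and the toy XOR task below are choices made here, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic non-linearly-separable toy problem for back-propagation.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
Y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def forward(W1, b1, W2, b2):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    O = sigmoid(H @ W2 + b2)          # network output
    return H, O

def error(O):
    return float(np.mean((O - Y) ** 2))

lr = 0.5
_, O = forward(W1, b1, W2, b2)
init_err = prev_err = error(O)

for _ in range(20000):
    H, O = forward(W1, b1, W2, b2)
    # Back-propagate the mean-squared-error gradient through the sigmoids.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    gW2, gb2 = H.T @ dO, dO.sum(0)
    gW1, gb1 = X.T @ dH, dH.sum(0)

    # Take a tentative gradient step with the current learning rate.
    nW1, nb1 = W1 - lr * gW1, b1 - lr * gb1
    nW2, nb2 = W2 - lr * gW2, b2 - lr * gb2
    _, nO = forward(nW1, nb1, nW2, nb2)
    new_err = error(nO)

    if new_err <= prev_err:
        # Error fell: accept the step and grow the rate slightly.
        W1, b1, W2, b2 = nW1, nb1, nW2, nb2
        prev_err = new_err
        lr *= 1.05
    else:
        # Error rose: reject the step and shrink the rate sharply.
        lr *= 0.5

print(round(prev_err, 4))
```

Because improving steps enlarge the rate and worsening steps are undone, the error here is non-increasing by construction; escaping local optima (the role the abstract assigns to the automata algorithm) would require an additional mechanism, such as a randomized restart or perturbation, which this sketch omits.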


Similar resources

Improving Error Back Propagation Algorithm by using Cross Entropy Error Function and Adaptive Learning Rate

Improving the efficiency and convergence rate of multilayer back-propagation neural network algorithms is an important area of research. Recent research has witnessed increasing attention to entropy-based criteria in adaptive systems. Several principles have been proposed based on the maximization or minimization of a cross-entropy function. One way of entropy criteria in learning systems i...


Multiagent learning using a variable learning rate

Learning to act in a multiagent environment is a difficult problem since the normal definition of an optimal policy no longer applies. The optimal policy at any moment depends on the policies of the other agents. This creates a situation of learning a moving target. Previous learning algorithms have one of two shortcomings depending on their approach. They either converge to a policy that may n...


Improving the Neural Network Training for Face Recognition using Adaptive Learning Rate, Resilient Back Propagation and Conjugate Gradient Algorithm

Face recognition is a method for verifying or identifying a person from a digital image. In this paper an approach for classifying images based on discrete wavelet transform (DWT) and neural network (NN) has been suggested. In the proposed approach, DWT decomposes an image into images with different frequency bands. An NN is a trainable and dynamic system which can acceptably estimate input-out...


A Differential Adaptive Learning Rate Method for Back-Propagation Neural Networks

In this paper a high-speed learning method using a differential adaptive learning rate (DALRM) is proposed. Comparison of this method with other methods such as standard BP, Nguyen-Widrow weight initialization and Optical BP shows that the network's learning speed is greatly increased. Learning often takes a long time to converge and it may fall into local minima. One way of escaping from local ...


Numerical Study of Back-Propagation Learning Algorithms for Multilayer Networks

A back-propagation learning algorithm is examined numerically for feedforward multilayer networks with one hidden layer functioning as a parity machine or as a committee machine of the internal representation of the hidden units. It is found that the maximal known theoretical capacity is saturated and that the convergence time is not exponential in the size of the system. The results also indic...


Adaptive Back-Propagation in On-Line Learning of Multilayer Networks

An adaptive back-propagation algorithm is studied and compared with gradient descent (standard back-propagation) for on-line learning in two-layer neural networks with an arbitrary number of hidden units. Within a statistical mechanics framework, both numerical studies and a rigorous analysis show that the adaptive back-propagation method results in faster training by breaking the symmetry bet...




Journal:
Structural Engineering (مهندسی سازه)

Vol. 5, No. 6, pp. 31-40

